Combining Multiple Classifiers with Dynamic Weighted Voting

Authors

  • Rosa Maria Valdovinos
  • José Salvador Sánchez
Abstract

When a multiple classifier system is employed, one of the most popular methods of classifier fusion is simple majority voting. However, when the performance of the ensemble members is not uniform, the efficiency of this type of voting is generally affected negatively. In the present paper, new functions for dynamic weighting in classifier fusion are introduced. Experimental results with several real-problem data sets from the UCI Machine Learning Database Repository demonstrate the advantages of these novel weighting strategies over the simple voting scheme.
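
The paper's specific weighting functions are not reproduced in this abstract. As a rough illustration of the general idea only, the sketch below contrasts simple majority voting with one possible dynamic weighting, where each ensemble member's vote is weighted by its estimated competence near the query instance (here approximated by its accuracy on the query's nearest training neighbours); the data set, base learner, and competence estimate are all assumptions, not the paper's method.

```python
# Illustrative sketch only: the "local accuracy" weight below is an assumed
# stand-in for the dynamic weighting functions studied in the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ensemble of classifiers trained on bootstrap samples of the training set.
rng = np.random.default_rng(0)
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    ensemble.append(DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx]))

nn = NearestNeighbors(n_neighbors=10).fit(X_tr)

def majority_vote(x):
    votes = [clf.predict(x.reshape(1, -1))[0] for clf in ensemble]
    return np.bincount(votes).argmax()

def dynamic_weighted_vote(x):
    # Weight each member by its accuracy on the query's nearest training neighbours.
    _, neigh = nn.kneighbors(x.reshape(1, -1))
    scores = np.zeros(3)  # three iris classes
    for clf in ensemble:
        w = clf.score(X_tr[neigh[0]], y_tr[neigh[0]])  # local competence in [0, 1]
        scores[clf.predict(x.reshape(1, -1))[0]] += w
    return scores.argmax()

maj = np.mean([majority_vote(x) == t for x, t in zip(X_te, y_te)])
dyn = np.mean([dynamic_weighted_vote(x) == t for x, t in zip(X_te, y_te)])
print(f"majority voting: {maj:.3f}  dynamic weighting: {dyn:.3f}")
```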

Related articles

Bagging and Boosting with Dynamic Integration of Classifiers

One approach in classification tasks is to use machine learning techniques to derive classifiers from learning instances. The cooperation of several base classifiers as a decision committee has succeeded in reducing classification error. The main current decision committee learning approaches, boosting and bagging, use resampling of the training set and can be used with different machine le...
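
As a minimal reminder of what the resampling step in bagging looks like (not the dynamic-integration scheme this paper studies), a bootstrap committee can be built as in the sketch below; the data set and base learner are arbitrary choices for illustration.

```python
# Minimal bagging sketch: each committee member sees a bootstrap resample of
# the training set; predictions are combined by plain majority voting.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
committee = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))   # sample with replacement
    committee.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

# Majority vote over the committee (binary labels 0/1).
all_preds = np.array([clf.predict(X_te) for clf in committee])
voted = (all_preds.mean(axis=0) > 0.5).astype(int)
print("committee accuracy:", (voted == y_te).mean())
```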

Learn++.NC: Combining Ensemble of Classifiers With Dynamically Weighted Consult-and-Vote for Efficient Incremental Learning of New Classes

We have previously introduced an incremental learning algorithm, Learn++, which learns novel information from consecutive data sets by generating an ensemble of classifiers with each data set and combining them by weighted majority voting. However, Learn++ suffers from an inherent "outvoting" problem when asked to learn a new class ω_new introduced by a subsequent data set, as earlier ...

An Experimental Study of Methods Combining Multiple Classifiers - Diversified both by Feature Selection and Bootstrap Sampling

Ensemble approaches are learning algorithms that construct a set of classifiers and then classify new instances by combining their predictions. These approaches can outperform single classifiers on a wide range of classification problems. In this paper we propose an extension of the bagging classifier, integrating it with feature subset selection. Moreover, we examined the usage of other methods ...
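
A sketch of the general idea is shown below: committee members are diversified both by bootstrap resampling and by a random feature subset per member. The paper's exact subset-selection method is not detailed in this abstract, so the random selection here is only an assumption.

```python
# Sketch: diversify members by bootstrap sampling and by a random feature
# subset per member; combine their predictions by majority voting.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

rng = np.random.default_rng(2)
members = []                       # list of (feature indices, fitted classifier)
n_feat = X.shape[1]
for _ in range(15):
    rows = rng.integers(0, len(X_tr), len(X_tr))                  # bootstrap sample
    cols = rng.choice(n_feat, size=n_feat // 2, replace=False)    # feature subset
    clf = DecisionTreeClassifier().fit(X_tr[np.ix_(rows, cols)], y_tr[rows])
    members.append((cols, clf))

# Majority voting over the members' predictions.
preds = np.array([clf.predict(X_te[:, cols]) for cols, clf in members])
voted = np.array([np.bincount(col).argmax() for col in preds.T])
print("ensemble accuracy:", (voted == y_te).mean())
```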

BittenPotato: Tweet Sentiment Analysis by Combining Multiple Classifiers

In this paper, we use a bag-of-words of n-grams to capture a dictionary containing the most used "words", which we use as features. We then classify using four different classifiers and combine their results by applying plain voting, weighted voting, and a meta-classifier to obtain the real polarity of a phrase.
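
The exact features and classifiers used in that paper are not listed here; the sketch below only illustrates the general shape of such a pipeline (n-gram bag-of-words features, several classifiers, plain and weighted voting), with toy sentences standing in for tweets and arbitrarily chosen weights.

```python
# Toy sketch of the pipeline shape: n-gram bag-of-words features, several
# classifiers, and their predictions combined by plain and weighted voting.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great phone, love it", "terrible battery", "love the camera",
         "worst purchase ever", "really happy with it", "broke after a week"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative (toy data)

def build(weights=None):
    # Unigram + bigram bag-of-words, three base classifiers, hard voting.
    return make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        VotingClassifier(
            [("nb", MultinomialNB()),
             ("lr", LogisticRegression()),
             ("svm", LinearSVC())],
            voting="hard", weights=weights))

plain = build()                     # each classifier's vote counts equally
weighted = build(weights=[1, 2, 2]) # weights chosen arbitrarily for the sketch

for name, model in [("plain voting", plain), ("weighted voting", weighted)]:
    model.fit(texts, labels)
    print(name, model.predict(["love this", "awful"]))
```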

Incremental Learning of Ultrasonic Weld Inspection Signals

In this paper, we present LEARN++, a new algorithm that allows existing classifiers to learn incrementally from new data without forgetting previously acquired knowledge. LEARN++ is based on generating multiple classifiers, each trained with a different subset of the training data, and then combining them to form an ensemble of classifiers using weighted majority voting. The fundamental contribu...
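
LEARN++'s precise distribution-update and weighting rules are not given in this abstract. The toy sketch below only conveys the overall structure (an ensemble that grows as new data batches arrive, combined by weighted majority voting); the weight used, derived from each classifier's accuracy on its own batch, is an assumed simplification, and the data set is arbitrary.

```python
# Toy sketch of a Learn++-style structure: classifiers are added per data batch
# and combined by weighted majority voting; the weighting rule is an assumption.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

ensemble = []   # list of (classifier, voting weight); grows with each batch

def learn_batch(X_batch, y_batch):
    clf = DecisionTreeClassifier(max_depth=8).fit(X_batch, y_batch)
    err = 1.0 - clf.score(X_batch, y_batch)
    weight = np.log((1.0 - err) / max(err, 1e-6))   # higher accuracy -> larger vote
    ensemble.append((clf, weight))

def predict(X_query):
    scores = np.zeros((len(X_query), 10))           # ten digit classes
    for clf, w in ensemble:
        preds = clf.predict(X_query)
        scores[np.arange(len(X_query)), preds] += w
    return scores.argmax(axis=1)

# Present the training data as three consecutive batches ("data sets").
for batch in np.array_split(np.arange(len(X_tr)), 3):
    learn_batch(X_tr[batch], y_tr[batch])
    print("ensemble size:", len(ensemble),
          "accuracy:", (predict(X_te) == y_te).mean())
```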



Publication date: 2009